The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a considerable portion of participants (32%) stated that they did not have enough time for it. 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on either multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
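Patch-based training, the survey's most common workaround for oversized samples, amounts to tiling each large image into fixed-size windows before feeding them to the network. A minimal sketch of the tiling step (function name and sizes are our own illustration, not any specific challenge pipeline):

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Slide a square window over a 2D image and collect patches.

    Windows that would run past the border are skipped, so the image
    should be padded beforehand if full coverage is required.
    """
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# A 512x512 "image" split into non-overlapping 128x128 patches.
image = np.random.rand(512, 512)
patches = extract_patches(image, patch_size=128, stride=128)
```

With stride equal to the patch size the tiling is non-overlapping; a smaller stride yields overlapping patches, which is often preferred at inference time to smooth seam artifacts.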
Weakly supervised semantic segmentation (WSSS) with image-level labels is a challenging task in computer vision. Mainstream approaches follow a multi-stage framework and suffer from high training costs. In this paper, we explore the potential of Contrastive Language-Image Pre-training models (CLIP) to localize different categories with only image-level labels and without any further training. To efficiently generate high-quality segmentation masks from CLIP, we propose a novel framework called CLIP-ES for WSSS. Our framework improves all three stages of WSSS with special designs for CLIP: 1) We introduce the softmax function into GradCAM and exploit the zero-shot ability of CLIP to suppress the confusion caused by non-target classes and backgrounds. Meanwhile, to take full advantage of CLIP, we re-explore text inputs under the WSSS setting and customize two text-driven strategies: sharpness-based prompt selection and synonym fusion. 2) To simplify the stage of CAM refinement, we propose a real-time class-aware attention-based affinity (CAA) module based on the inherent multi-head self-attention (MHSA) in CLIP-ViTs. 3) When training the final segmentation model with the masks generated by CLIP, we introduce a confidence-guided loss (CGL) to mitigate noise and focus on confident regions. Our proposed framework dramatically reduces the cost of training for WSSS and shows the capability of localizing objects in CLIP. Our CLIP-ES achieves SOTA performance on Pascal VOC 2012 and MS COCO 2014 while taking only 10% of the time required by previous methods for pseudo mask generation. Code is available at https://github.com/linyq2117/CLIP-ES.
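The effect of introducing the softmax into GradCAM can be seen directly from the gradient it backpropagates: the softmax Jacobian gives every non-target class a negative contribution, so features shared with confusing classes are suppressed in the resulting CAM. A small numpy sketch of that gradient (an illustration of the mathematical property, not the authors' implementation):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def softmax_grad_wrt_logits(logits, target):
    """Gradient of the target class's softmax probability w.r.t. all logits.

    Unlike the gradient of a raw logit (which ignores other classes),
    this gradient is negative for every non-target class, which is what
    suppresses activations shared with confusing categories.
    """
    p = softmax(logits)
    grad = -p[target] * p       # off-target entries: -p_t * p_c
    grad[target] += p[target]   # target entry: p_t * (1 - p_t)
    return grad

logits = np.array([2.0, 1.0, 0.5])
grad = softmax_grad_wrt_logits(logits, target=0)
```

The entries sum to zero: any increase in the target probability is paid for by decreases in the others, which is precisely the cross-class competition that raw-logit GradCAM lacks.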
Despite the tremendous progress of Masked Autoencoders (MAE) in vision tasks such as image and video, exploring MAE in large-scale 3D point clouds remains challenging due to their inherent irregularity. In contrast to previous 3D MAE frameworks, which either design a complex decoder to infer masked information from maintained regions or adopt sophisticated masking strategies, we instead propose a much simpler paradigm. The core idea is to apply a \textbf{G}enerative \textbf{D}ecoder for MAE (GD-MAE) that automatically merges the surrounding context to restore the masked geometric knowledge in a hierarchical fusion manner. In doing so, our approach is free from heuristic decoder designs and enjoys the flexibility of exploring various masking strategies. The corresponding decoder incurs less than \textbf{12\%} of the latency of conventional methods, while achieving better performance. We demonstrate the efficacy of the proposed method on several large-scale benchmarks: Waymo, KITTI, and ONCE. Consistent improvement on downstream detection tasks illustrates strong robustness and generalization capability. Not only does our method achieve state-of-the-art results; remarkably, it attains comparable accuracy even with only \textbf{20\%} of the labeled data on the Waymo dataset. The code will be released at \url{https://github.com/Nightmare-n/GD-MAE}.
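The flexibility GD-MAE claims over masking strategies starts from the simplest option: a plain random token mask. A hedged sketch of that baseline step (token count and mask ratio are illustrative, not the paper's settings):

```python
import numpy as np

def random_mask(num_tokens, mask_ratio, rng):
    """Return a boolean array marking which tokens are masked.

    Visible tokens (False entries) go to the encoder; masked tokens
    (True entries) are what the decoder must reconstruct.
    """
    num_masked = int(num_tokens * mask_ratio)
    perm = rng.permutation(num_tokens)
    masked = np.zeros(num_tokens, dtype=bool)
    masked[perm[:num_masked]] = True
    return masked

rng = np.random.default_rng(0)
masked = random_mask(num_tokens=256, mask_ratio=0.75, rng=rng)
```

Swapping in block-wise or curvature-aware masking only changes which indices are set to True; the encoder/decoder interface stays the same, which is the flexibility the abstract refers to.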
Lane detection is one of the fundamental modules in autonomous driving. In this paper we adopt a transformer-only method for lane detection; it can thus benefit from the development of fully vision transformers and achieve strong performance by fine-tuning weights fully pre-trained on large datasets. More importantly, this paper proposes a novel and general framework named PriorLane, which is used to enhance the segmentation performance of a fully vision transformer by introducing low-cost local prior knowledge. PriorLane utilizes an encoder-only transformer to fuse the features extracted by a pre-trained segmentation model with prior knowledge embeddings. Note that a Knowledge Embedding Alignment (KEA) module is employed to improve the fusion performance by aligning the knowledge embeddings. Extensive experiments on our ZJLAB dataset show that PriorLane outperforms SOTA lane detection methods by 2.82% mIoU, and the code will be released at: https://github.com/vincentqqb/priorlane.
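The fusion step described above can be pictured at the tensor level: the encoder-only fusion transformer receives one joint sequence in which segmentation-feature tokens and prior-knowledge tokens sit side by side, so self-attention lets every feature token attend to the priors. A minimal sketch of that joint input (all shapes are hypothetical, not PriorLane's actual configuration):

```python
import numpy as np

# Hypothetical shapes: 196 image-feature tokens from the pre-trained
# segmentation model, 16 prior-knowledge tokens, both projected to the
# same embedding width before fusion.
feature_tokens = np.random.rand(196, 64)
prior_tokens = np.random.rand(16, 64)

# The fusion transformer sees a single concatenated token sequence.
fused_input = np.concatenate([feature_tokens, prior_tokens], axis=0)
```

The KEA module's role, in these terms, is to bring `prior_tokens` into spatial correspondence with `feature_tokens` before this concatenation.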
Despite their remarkable ability to distinguish samples of the target distribution, deep neural networks perform poorly at detecting out-of-distribution (OOD) data. To address this deficiency, state-of-the-art solutions choose to train deep networks on an auxiliary dataset of outliers. Various training criteria for these auxiliary outliers have been proposed based on heuristic intuitions. However, we find that these intuitively designed outlier training criteria can harm in-distribution learning and ultimately lead to inferior performance. To this end, we identify three causes of the in-distribution incompatibility: contradictory gradients, false likelihood, and distribution shift. Based on our new understanding, we propose a new OOD detection method by adapting both the top design of deep models and the loss function. Our method achieves in-distribution compatibility by reducing the interference with the probabilistic characteristics of in-distribution features. On several benchmarks, our method not only achieves state-of-the-art OOD detection performance but also improves in-distribution accuracy.
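For context on the task itself, the standard baseline that outlier-training methods are measured against scores each input by its maximum softmax probability (MSP): confident, peaked predictions are taken as in-distribution, near-uniform ones as OOD. A sketch of that common baseline (not the paper's proposed detector):

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits):
    """Maximum softmax probability, a widely used OOD score baseline.

    Lower scores suggest the input is more likely out-of-distribution.
    """
    return softmax(logits).max(axis=-1)

in_dist = np.array([[6.0, 0.5, 0.2]])   # confident, peaked prediction
ood = np.array([[1.1, 1.0, 0.9]])       # near-uniform logits
```

Outlier-exposure-style methods try to sharpen this separation by explicitly pushing auxiliary outliers toward uniform predictions; the abstract's point is that naive versions of this pressure can distort the in-distribution features themselves.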
Two-stage detectors have gained much popularity in 3D object detection. Most two-stage 3D detectors utilize grid points, voxel grids, or sampled keypoints for RoI feature extraction in the second stage. Such methods, however, are inefficient in handling unevenly distributed and sparse outdoor points. This paper addresses the problem in three aspects. 1) Dynamic point aggregation. We propose patch search to quickly search the points in a local region for each 3D proposal. Farthest voxel sampling is then applied to evenly sample the points. In particular, the voxel size varies with distance to accommodate the uneven distribution of points. 2) RoI-graph pooling. We build local graphs on the sampled points to better model contextual information and mine point relations through iterative message passing. 3) Visual feature augmentation. We introduce a simple yet effective fusion strategy to compensate for sparse LiDAR points with limited semantic cues. Based on these modules, we construct Graph R-CNN as a second stage, which can be applied to existing one-stage detectors to consistently improve detection performance. Extensive experiments show that Graph R-CNN outperforms state-of-the-art 3D detection models by a large margin on both the KITTI and Waymo Open Dataset. We rank first on the KITTI BEV car detection leaderboard. The code will be available at \url{https://github.com/nightmare-n/graphrcnn}.
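The farthest voxel sampling step builds on classic farthest point sampling, which greedily picks each next point to maximize its distance to the already-selected set, yielding an even spatial cover of a sparse cloud. A minimal point-level sketch (the paper operates on voxels whose size grows with distance, which is not reproduced here):

```python
import numpy as np

def farthest_point_sampling(points, k, start=0):
    """Greedily select k indices; each new point maximizes its distance
    to the closest already-selected point, spreading samples evenly."""
    n = points.shape[0]
    selected = [start]
    dist = np.full(n, np.inf)  # distance to nearest selected point
    for _ in range(k - 1):
        d = np.linalg.norm(points - points[selected[-1]], axis=1)
        dist = np.minimum(dist, d)
        selected.append(int(np.argmax(dist)))
    return np.array(selected)

# Two near-duplicate points at the origin plus two distant ones:
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 0.0], [5.0, 5.0]])
idx = farthest_point_sampling(pts, k=3)
```

Note how the near-duplicate point (index 1) is skipped: uniform random sampling would keep it with high probability, wasting budget on a region that is already covered.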
Previous methods based on 3D CNNs, ConvLSTM, or optical flow have achieved great success in video salient object detection (VSOD). However, they still suffer from high computational costs or poor quality of the produced saliency maps. To solve these problems, we design a network based on a spatio-temporal memory (STM), which extracts useful temporal information for the current frame from adjacent frames as the temporal branch for VSOD. Furthermore, previous methods only consider single-frame prediction without temporal association. As a result, the model may not focus sufficiently on temporal information. Thus, we are the first to introduce inter-frame object motion prediction into VSOD. Our model follows a standard encoder-decoder architecture. In the encoding stage, we generate high-level temporal features by using the high-level features of the current frame and its adjacent frames. This approach is more efficient than optical-flow-based methods. In the decoding stage, we propose an effective fusion strategy for the spatial and temporal branches. The semantic information of high-level features is used to fuse the object details in low-level features, and spatio-temporal features are then progressively obtained to reconstruct the saliency maps. Moreover, inspired by the boundary supervision commonly used in image salient object detection (ISOD), we design a motion-aware loss that predicts object boundary motion, and we simultaneously perform multi-task learning for VSOD and object motion prediction, which can further facilitate the model to extract spatio-temporal features accurately and maintain object integrity. Extensive experiments on several datasets demonstrate the effectiveness of our method, which achieves state-of-the-art metrics on some datasets. The proposed model does not require optical flow or other preprocessing and can reach a speed of nearly 100 FPS during inference.
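Boundary supervision of the kind the motion-aware loss builds on needs a boundary map derived from the ground-truth mask. One simple way to obtain it is to mark object pixels whose 4-neighborhood touches the background (our sketch of the general idea, not the paper's exact recipe):

```python
import numpy as np

def object_boundary(mask):
    """Object pixels that have a background pixel in their 4-neighborhood."""
    m = mask.astype(bool)
    pad = np.pad(m, 1, mode="edge")
    # Up, down, left, right neighbors via shifted views of the padded mask.
    shifts = [pad[:-2, 1:-1], pad[2:, 1:-1], pad[1:-1, :-2], pad[1:-1, 2:]]
    touches_bg = np.zeros_like(m)
    for s in shifts:
        touches_bg |= ~s
    return m & touches_bg

mask = np.zeros((5, 5), dtype=int)
mask[1:4, 1:4] = 1            # a 3x3 object in a 5x5 frame
edges = object_boundary(mask)
```

A boundary-aware loss then compares the prediction against `edges` (or, for the motion variant, against the displacement of `edges` between frames) instead of against the full mask.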
Vision Transformers (ViTs), as a powerful alternative to Convolutional Neural Networks (CNNs), have attracted much attention. Recent works show that ViTs are also vulnerable to adversarial examples, like CNNs. To build robust ViTs, an intuitive way is to apply adversarial training, since it has been proven to be one of the most effective ways to accomplish robust CNNs. However, one major limitation of adversarial training is its heavy computational cost. The self-attention mechanism adopted by ViTs is a computationally intense operation whose cost increases quadratically with the number of input patches, making adversarial training on ViTs even more time-consuming. In this work, we first comprehensively study fast adversarial training on a variety of vision transformers and illustrate the relationship between efficiency and robustness. Then, to accelerate adversarial training on ViTs, we propose an efficient attention-guided adversarial training mechanism. Specifically, relying on the specialty of self-attention, we actively drop certain patch embeddings of each layer with an attention-guided dropping strategy during adversarial training. The slimmed self-attention modules accelerate adversarial training on ViTs significantly. With only 65% of the fast adversarial training time, we match the state-of-the-art results on the challenging ImageNet benchmark.
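Attention-guided patch dropping can be pictured as keeping only the patch embeddings that receive the most attention mass, so subsequent self-attention layers operate on a shorter sequence and their quadratic cost shrinks. A minimal sketch of that selection step (a simplified stand-in, not the paper's exact per-layer procedure):

```python
import numpy as np

def drop_low_attention_patches(embeddings, attn_scores, keep_ratio):
    """Keep only the patch embeddings with the highest attention scores,
    preserving their original order in the sequence."""
    num_keep = max(1, int(len(embeddings) * keep_ratio))
    keep_idx = np.argsort(attn_scores)[::-1][:num_keep]
    return embeddings[np.sort(keep_idx)]

emb = np.arange(8).reshape(4, 2).astype(float)   # 4 patch embeddings
scores = np.array([0.1, 0.6, 0.05, 0.25])        # per-patch attention mass
slimmed = drop_low_attention_patches(emb, scores, keep_ratio=0.5)
```

Halving the sequence length roughly quarters the cost of each subsequent self-attention layer, which is the source of the training-time savings reported above.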
Sign language translation, as a technology with profound social significance, has attracted the interest of researchers in recent years. However, existing sign language translation methods need to read the entire video before starting translation, which leads to high inference latency and limits their application in real-world scenarios. To solve this problem, we propose SimulSLT, the first end-to-end simultaneous sign language translation model, which can translate sign language videos into target text concurrently. SimulSLT is composed of a text decoder, a boundary predictor, and a masked encoder. We 1) use the wait-k strategy for simultaneous translation; 2) design a novel boundary predictor based on the integrate-and-fire module to output gloss boundaries, which are used to model the correspondence between sign language videos and glosses; and 3) propose an innovative re-encode method to help the model obtain richer contextual information, allowing the existing video features to interact fully. Experimental results on the RWTH-PHOENIX-Weather 2014T dataset show that SimulSLT achieves BLEU scores exceeding those of the latest end-to-end non-simultaneous sign language translation model while maintaining low latency, which proves the effectiveness of our method.
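The wait-k strategy fixes a simple read/write schedule: the decoder waits for the first k source segments, then emits one target token per additionally read segment until the source is exhausted. A sketch of the schedule itself (segment counts are illustrative; in SimulSLT the "segments" correspond to glosses delimited by the boundary predictor):

```python
def wait_k_schedule(k, src_len, tgt_len):
    """Number of source segments available before emitting each target
    token under wait-k: wait for k segments, then proceed in lockstep,
    capped at the total source length."""
    return [min(k + t, src_len) for t in range(tgt_len)]

schedule = wait_k_schedule(k=3, src_len=6, tgt_len=5)
```

The cap at `src_len` is what makes the tail of decoding behave like ordinary full-sentence translation once the whole input has been consumed.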
Semantic role labeling (SRL) aims to recognize the predicate-argument structure of a sentence and can be decomposed into two subtasks: predicate disambiguation and argument labeling. Prior work handles these two tasks independently, which ignores the semantic connection between them. In this paper, we propose to use a machine reading comprehension (MRC) framework to bridge this gap. We formalize predicate disambiguation as multiple-choice machine reading comprehension, where the descriptions of the candidate senses of a given predicate are used as options to select the correct sense. The selected predicate sense is then used to determine the semantic roles for that predicate, and these semantic roles are used to construct the query of another MRC model for argument labeling. In this way, we are able to exploit both predicate semantics and semantic role semantics for argument labeling. We also propose to select a subset of all possible semantic roles for computational efficiency. Experiments show that the proposed framework achieves state-of-the-art or comparable results to previous work. Code is available at \url{https://github.com/shannonai/mrc-srl}.
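Casting predicate disambiguation as multiple-choice MRC boils down to formatting the sentence, the predicate, and the candidate sense descriptions into a single query for the reader model. A minimal sketch of such a query builder (the prompt wording and example senses are our own illustration, not the paper's template):

```python
def build_sense_query(sentence, predicate, sense_descriptions):
    """Format predicate disambiguation as a multiple-choice MRC input:
    one question plus lettered options, one per candidate sense."""
    options = [f"({chr(65 + i)}) {desc}"
               for i, desc in enumerate(sense_descriptions)]
    question = f'In "{sentence}", which sense does "{predicate}" have?'
    return question + " " + " ".join(options)

query = build_sense_query(
    "She banked the plane sharply.",
    "banked",
    ["tilted to one side", "deposited money"],
)
```

The second-stage argument-labeling query is built analogously, except that the options are replaced by role descriptions tied to the sense chosen here, which is how the two subtasks share semantic information.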